Personhood isn't a 'feature'.
Creating space for human messiness at the heart of AI architecture

I’m an ethicist asking awkward questions about AI architecture to keep human messiness at the heart of design. Our identity isn’t universal, and consent isn’t a given. AI ethics, governance, and responsible AI sound like policy papers, but they are really about whether a person gets to remain a person: always a little messy. Does our identity change as it’s continuously modeled and predicted? What happens when synthetic intelligence learns from me and shapes how I understand myself? Who decides which version of me remains? What does consent look like as AI evolves?

I designed “Narrative Consent” as a framework to ensure personhood doesn’t become a toggle buried in architecture or a terms-of-service checkbox. I doubt machines will become human. But I’m less certain that humans will avoid behaving like machines, stripped of the “messy” experiences that come from independent thought. Will AI serve as a tool, or as a surrogate for our agency?

Get in touch – let’s get awkward.

Articles

  • Substack
  • Medium
  • LinkedIn
  • Instagram

© 2025 EthicalDesign.AI and The Chandler Group LLC.
