Discussion about this post

Rohit Kamath

Yes, asymmetric intelligence can bias one in favour of certain actions. In neuroscience, we see this constantly: the public 'consensus' on disease pathology often lags years behind what's being discovered in private labs or failed clinical trials. If pharma is sitting on the 'missing pieces' of the map, one could surmise that their AI would only appear to 'break' when compared against the public literature; in reality, it is working with a more complete puzzle than the rest of us.

Looking forward to that post on biological reasoning.

I'm curious whether you think those 'asymmetric' data moats are the key to teaching AI to reason, or whether we still need a shift in how the models actually 'think' about the data they have.

Rohit Kamath

"whoever helps scientific organizations encode their own ontology, their own context, and their own judgment without surrendering them to a vendor-controlled worldview."

Are you talking about personalised AIs per organisation, trained on an internal database, judgement, and context? Someone who builds an AI, or a shell that is easily ingestible into a scientific org? How would that work?

Great article!

