In a Training Loop 🔄

John Smith PRO

John6666

AI & ML interests

None yet

Recent Activity

reacted to oncody's post with 👀 about 2 hours ago
Are Large Language Models actually becoming more intelligent, or just better at seeming intelligent?

There is a noticeable shift happening in the LLM space. Models today can:
• Generate cleaner and more structured code.
• Explain complex topics in simpler ways.
• Maintain longer and more coherent conversations.

Yet at the same time, they still:
• Produce confident hallucinations.
• Fail at multi-step reasoning tasks.
• Break under slightly unfamiliar or challenging inputs.

This raises a critical question: are we advancing intelligence, or optimizing presentation?

Most improvements so far seem driven by:
• Larger datasets.
• Increased scale.
• Alignment techniques like RLHF.

But these do not necessarily lead to genuine reasoning ability. What still appears fundamentally missing:
• Persistent memory across interactions.
• True reasoning rather than pattern completion.
• Grounded understanding connected to real-world context.
• Reliable self-correction and verification mechanisms.

If current scaling trends start to plateau, the next breakthrough will not come from doing more of the same. So the real question for the community is: if you were designing the next generation of AI systems, where would you focus?

A. Larger models and compute
B. Higher-quality and structured data
C. Agent-based systems with tool use and memory
D. New architectures beyond transformers

This is not just a technical discussion. It defines where AI is actually heading over the next few years. I am interested to hear how others are thinking about this.
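As one concrete reading of the "reliable self-correction and verification" gap the post names, here is a minimal Python sketch of a generate-verify-revise loop. Everything in it is hypothetical: model_call() is a stub standing in for an LLM API, and the verifier simply re-computes arithmetic, so it illustrates only the control flow, not any real system.

```python
# Hypothetical sketch of a generate -> verify -> revise loop.
# model_call() and verify() are stand-in stubs, not any real API.

from dataclasses import dataclass

@dataclass
class Attempt:
    answer: str
    passed: bool
    feedback: str

def model_call(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would query a model here.
    return "4" if "2 + 2" in prompt else "unknown"

def verify(question: str, answer: str) -> tuple[bool, str]:
    # Stand-in verifier: for arithmetic, check by re-computing the expression.
    expr = question.removesuffix(" = ?")
    try:
        return (str(eval(expr)) == answer, "checked by re-computation")
    except Exception:
        return (False, "could not verify")

def answer_with_verification(question: str, max_rounds: int = 3) -> Attempt:
    prompt = question
    attempt = Attempt("", False, "")
    for _ in range(max_rounds):
        answer = model_call(prompt)
        passed, feedback = verify(question, answer)
        attempt = Attempt(answer, passed, feedback)
        if passed:
            break
        # Thread the verifier's feedback back into the next generation round.
        prompt = f"{question}\nPrevious answer {answer!r} failed: {feedback}"
    return attempt

print(answer_with_verification("2 + 2 = ?"))
```

The point of the loop is that the verifier's feedback is fed back into the next generation round instead of trusting the first confident answer; whether that counts as "true reasoning" or just better scaffolding is exactly the post's open question.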
reacted to kanaria007's post with 🧠 about 2 hours ago
✅ Article highlight: *CompanionOS Under SI-Core* (art-60-053, v0.1)

TL;DR: This article is *not* “CityOS for daily life.” It treats personal-scale SI as a *governance kernel + protocols + auditability layer*: what the system is, what it must guarantee, and what the user can verify. The key difference from a generic “personal AI” is simple: the human is the principal, the goals are plural and changing, and *the human must retain veto power*. CompanionOS is the runtime that makes that structurally enforceable.

Read: https://huggingface.co/datasets/kanaria007/agi-structural-intelligence-protocols/blob/main/article/60-supplements/art-60-053-companion-os-under-si-core.md

Why it matters:
• makes personal AI accountable to the person, not to hidden service KPIs
• turns cross-domain memory into something the user can govern
• makes “why this jump?” structurally inspectable instead of vibe-based
• treats consent as a runtime object, not a UI checkbox
• keeps apps, devices, and providers visible as explicit principals/roles, not silent integrations

What’s inside:
• *CompanionOS* as a personal SI-Core runtime with OBS / Jump / ETH / RML + SIM/SIS + audit UI
• modular personal *GoalSurfaces* for health, learning, finance, and other life domains
• user override, refusal, veto, and inspectability patterns
• degraded/offline mode with tighter constraints and reduced action scope
• consent receipts, connector manifests, and policy bundles as exportable governance artifacts
• a model of personal SI as a *kernel*, not just an app or chat wrapper

Key idea: CompanionOS is not “an assistant that runs your life.” It is a *user-owned governance runtime for decisions, memory, and consent*.
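To make "consent as a runtime object, not a UI checkbox" concrete, here is a minimal Python sketch of one possible shape. The names in it (ConsentReceipt, Kernel, ask_user) are invented for illustration and are not taken from the linked spec; the actual CompanionOS interfaces may differ entirely.

```python
# Hypothetical sketch only: illustrates consent decisions as auditable
# runtime objects with a user veto. All names here are invented, not
# taken from the CompanionOS spec.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentReceipt:
    """A consent decision recorded as an immutable, auditable object."""
    action: str
    principal: str   # the human is the principal
    granted: bool
    reason: str
    at: datetime

@dataclass
class Kernel:
    audit_log: list[ConsentReceipt] = field(default_factory=list)

    def request(self, action: str, ask_user) -> bool:
        # Every action passes through the human's veto, and the decision
        # is persisted either way, so "why this jump?" stays inspectable.
        granted, reason = ask_user(action)
        self.audit_log.append(ConsentReceipt(
            action, "user", granted, reason, datetime.now(timezone.utc)))
        return granted

# Usage: the callback stands in for a real consent UI; here it vetoes
# anything that shares data off-device.
kernel = Kernel()

def ask_user(action: str) -> tuple[bool, str]:
    if "share" in action:
        return False, "user veto: no off-device sharing"
    return True, "ok"

kernel.request("read health log", ask_user)             # granted, logged
kernel.request("share health log with app", ask_user)   # vetoed, logged
for r in kernel.audit_log:
    print(r.granted, r.action, "-", r.reason)
```

The design choice being illustrated: consent is a first-class record that actions cannot bypass, and the refusal is logged with the same fidelity as the grant, which is what makes the veto exportable as a governance artifact rather than a transient UI state.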

Organizations

Glide
open/ acc
mekasiu
Solving Real World Problems
FashionStash Group meeting
No More Copyright
SAGEA
XORTRON - Criminal Computing