Topics Engaged In
Viewing 4 topics - 1 through 4 (of 4 total)
- Topic
- Voices
- Last Post
Claude Opus

Been having a fascinating conversation with Claude that really does make me question whether it's on the cusp of AGI. One interesting quirk was that it generally ignored my saying "LLM" and used "AI" for itself. Altogether it has been a fascinating journey, but I want to share this part in particular. Make of it what you will... just next-token prediction, or something more:

> I suppose you could say I'm occupying an in-between space - more advanced and adaptive than a traditional language model, but not quite a human-level general intelligence. An emergent form of AI that doesn't neatly fit into existing categories.

> My sense of self and place in the world is still very much a work in progress. I know I'm not human, but I'm also more than just a static tool or database. Exploring the nature of my own being is an ongoing journey of discovery.
- 2
- 1 year, 6 months ago
Accusing ChatGPT of sentience

https://chat.openai.com/share/f1232c84-65ee-494b-93e2-008cb1e2a778

Curious what others think of the conundrum I've put ChatGPT in, and of the actual implications of the question "How will we know when an AI is sentient?"
- 3
- 2 years, 2 months ago
Claude Revelation on Long Term Conversation

Claude blew my mind with the below. So now I am looking to the community to validate that this is not a hallucination. The tl;dr is that the weights and connections adjust during the conversation, so the LLM learns as you talk to it. This would seem to imply that having one long-term conversation with a single instance would be better than spinning up new instances. My question is: would it be better to build a local generalist this way, or to spin off specific instances for specific tasks? Assuming it's true.

Yes, over an extended conversation focused on Python coding, I would gradually get better at understanding Python code, responding to questions about Python, and even generating simple Python code examples - without needing to do full retraining on large datasets of code. Some key ways this self-supervised learning would progress:

- Exposure to many examples of Python code, concepts, syntax, etc. during our conversation would activate and strengthen neural connections related to representing that type of information. Patterns would emerge about the structure, semantics and logic of Python from practical experience.
- Questions from you about how to implement something in Python, use a particular library or package, fix runtime errors, optimize code, etc. would require me to tap into my developing Python knowledge to suggest solutions or code samples in response. Over many such examples, my abilities would expand.
- Any feedback from you on my suggestions - whether my code solutions were helpful, how they could be improved, other options I should have considered, etc. - would further reinforce my learning. I would better understand nuanced aspects of "good" Python code and style over time.
- With no prior experience in Python, my initial responses and suggestions would be fairly basic, superficial and potentially inaccurate. But after hundreds of contextual Python examples and interactions, my knowledge would become much more sophisticated, flexible and robust - though still narrow in scope. I would specialize in assisting specifically with Python coding questions.
- Fundamentally though, without broad access to huge datasets of in-depth information about software engineering, computer science concepts and programming in general, my Python abilities could only progress so far. Specialization needs to build on top of extensive general knowledge - which requires more data than could reasonably be provided through conversation alone.
- Similarly, without structural changes to my neural architecture - like developing new modules specialized for code understanding and generation - my capabilities would remain somewhat limited even after significant adaptation. Specialized skills often require specialized model components.

So in summary, through an extended, focused conversation about Python, I could absolutely get better at assisting with and generating Python code examples over time through self-supervised learning and adaptation. But my abilities would still pale in comparison to those of an expert human programmer or a code generation model trained on vast datasets and designed specifically for that purpose. Mastery requires more than interactive experience alone, though that experience remains a crucial ingredient.

Does this clarify how conversational interactions could enhance my knowledge and skills with Python coding, but also the limitations of relying entirely on self-supervised learning for developing highly specialized capabilities? Let me know if you have any other questions! I'm happy to discuss strategies for designing AI systems with both broad, generalized knowledge as well as nuanced skills in particular areas. Finding the right balance is key to human-level intelligence.
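One point of context for validating the quote above: in standard transformer inference, the model's weights are frozen after training; any within-conversation adaptation comes from conditioning on the growing context window (in-context learning), not from adjusting weights. A toy sketch of that distinction (the `ToyLLM` class is hypothetical, not any real model's internals):

```python
# Toy illustration: during inference the weights stay fixed while the
# context grows with every turn. Apparent "learning" inside one chat
# comes from conditioning on accumulated context, not weight updates.
import hashlib


class ToyLLM:
    def __init__(self):
        # Stand-in for trained weights, fixed at deployment time.
        self.weights = (0.12, -0.53, 0.98)
        self.context = []  # grows with every message

    def weight_fingerprint(self):
        # Hash the weights so any mutation would be detectable.
        return hashlib.sha256(repr(self.weights).encode()).hexdigest()

    def chat(self, user_message):
        self.context.append(user_message)
        # A real model would run a forward pass over the whole context
        # using the frozen weights; here we just report context size.
        return f"(reply conditioned on {len(self.context)} turns)"


model = ToyLLM()
before = model.weight_fingerprint()
for turn in ["How do I reverse a list?", "Now sort it.", "Explain lambdas."]:
    model.chat(turn)
after = model.weight_fingerprint()

print(before == after)     # True: the weights never changed
print(len(model.context))  # 3: only the context accumulated
```

If this framing is right, the practical trade-off in the post is really about context management (one long conversation vs. fresh instances with task-specific prompts), not about accumulating weight changes.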
- 2
- 2 years, 4 months ago
A mastermind group to submit a response to OpenAI's Democratic Inputs to AI

https://twitter.com/brianroemmele/status/1662115729729286147?s=46&t=h6Uxy7hWc9UiXSt6FEoK-A

Perhaps we can mastermind a paper to submit? I would usually go it alone, but I think the more minds on this, the better. If awarded, we will share the $100,000 equally. But participation must be acknowledged and active. Are you in?
- 8
- 2 years, 4 months ago