I can’t believe we’re into double-digit discussion activities! This semester is blowing by … it’s been fun so far. Anyway, you ought to spend a little time reading Topic 10 … it will give you some good insight into the types of systems you’ll need to plan for when solving the 110 Challenge. Ok, here we go …
Discussion Activity
Computers that can evolve to improve themselves have been fantasized about in science fiction books for years, but something similar has occurred at the University of Sussex in England. Adrian Thompson (Centre for Computational Neuroscience and Robotics) works with computer chips that can manage their own logic gates by testing new designs and choosing the best configurations for a particular task. Two technologies make this possible: evolutionary algorithms, computer programs that rapidly generate variations in their own code and then evaluate and select the most efficient ones; and Field Programmable Gate Arrays (FPGAs), chips whose transistors form an array of logic cells whose functions and connections to other cells can be changed as they are programmed. When these two technologies are brought together (shazam!), circuits can become more effective than similar circuits designed by humans using known principles. There is one problem — Adrian doesn’t know how it works.
So if a management system, which is a critical system, has been built using a working technology such as evolutionary algorithms and FPGAs, but we don’t know how it works, should we implement this technology? Is this technology reliable? Explain, in your opinion, the possible advantages and disadvantages, and the potential for success and risk.
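(For anyone who wants the generate-evaluate-select loop in concrete terms before answering, here is a minimal sketch in Python. It is not Thompson’s actual setup: the bitstring encoding of a configuration, the toy fitness function, and all parameter values are illustrative assumptions.)

```python
import random

# A candidate "circuit configuration" is modeled as a bitstring, loosely
# analogous to the bits that program an FPGA's logic cells (an assumption
# for illustration, not Thompson's real encoding).
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4  # toy "ideal" configuration

def fitness(candidate):
    # Toy stand-in for "test the circuit on the task": count matching bits.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Generate a variation by flipping each bit with small probability.
    return [1 - b if random.random() < rate else b for b in candidate]

def evolve(pop_size=20, generations=500):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every candidate and keep the best half (selection) ...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        if fitness(survivors[0]) == len(TARGET):
            break
        # ... then refill the population with mutated copies (variation).
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    population.sort(key=fitness, reverse=True)
    return population[0]

best = evolve()
print(f"best configuration scores {fitness(best)}/{len(TARGET)}")
```

The point to notice is that the loop finds a good configuration without anyone ever writing down why that configuration works, which is exactly the “we don’t know how it works” problem in the prompt.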
The technology should be implemented on an experimental, small-scale basis so that we can see how it works. This would probably help us determine its reliability. The problem is, if we don’t know how it works and something goes wrong, then how do we fix it? How do we even know whether something went well or not? If implementing this technology on a small, experimental scale could teach us how it works, how to identify and fix problems, etc., then I would say let’s implement it and learn this fascinating new technology. But if we can’t figure out how it works through experimental usage, then the technology isn’t very useful. Getting this to work would be very helpful and a great breakthrough in getting computers to operate, manage, and repair themselves with little human help. More time could be spent developing even newer technology if the technology already created could manage itself.
I believe the use of evolutionary algorithms and FPGAs would be extremely effective, especially for critical systems like a management system. The system will run more efficiently if it is able to manage itself without the help of humans. However, the computers themselves do not really make decisions. They just follow their algorithms and FPGA configurations, which are not 100% bulletproof, especially since no one fully understands how the two technologies function. There is too much risk placed on the management system, or any system, that runs this unreliable technology.
It is vital to have human resources who are competent in the area of these evolutionary algorithms and FPGAs. Even if it takes ten times as long to complete what the technology could do very quickly, there must be a reliable human backup plan in case there are complications. It is up to the individual as to whether they would want to implement such technology, but I personally feel that the risk is far too great knowing the possible problems that could occur. It would, on one hand, present enormous advantages to business and boost the abilities of corporate America as far as data storage and management systems go, but with these advantages also come some substantial disadvantages. The main disadvantage is that if something were to go wrong with the system, no one would have the personnel to address the problem, and therefore the system would be of no use after a problem occurred.
Artificial intelligence systems should be rolled out in industry today. They can save time, make decision making a clearer process, and potentially save companies money because they could hire fewer human resource people. (For anyone in SMEAL, that part doesn’t sound like a good idea.) If a machine can go through lines and lines of data, analyze that information, and give a detailed output on a customer … that is such an advantage to the company. The company can use that information to help tailor its business to some of its better customers, which in return could help draw in more customers willing to put their money into the company.
These systems seem to data-mine the history and potential of people, which could be described as one of their best aspects, because they save the company from the possible risk of a consumer. For example, if a car dealership wanted to give a new customer a loan, it could look up the customer’s past credit history and figure out whether or not they are a liability. Even if this catches just one potential problem, in this case the dealership would be saving thousands on a possible bad debtor.
Even though these systems seem reliable and are in fact very useful to businesses, there must be a backup plan. Investments must be made in human resources, because if the FPGAs or other artificial intelligence systems fail, then companies could be in big trouble. I think the programs had better be backed by a terrific success rate, because people and developers still aren’t even sure how these machines work! That is pretty crazy for a business: investing thousands and millions of dollars into equipment that can be so useful while taking such a chance on trusting it!
I think this is a no-brainer. Would we launch major weapons, rely on security devices, or use any other type of technology if we weren’t exactly sure how it worked? It may be great technology that works more effectively and accomplishes great things, but if we don’t understand exactly how it works, it is potentially dangerous and ineffective.
In the case of reliability, how are we supposed to know how reliable something is if we don’t understand the technology behind it? The piece of technology could end up becoming one of the greatest and most efficient items ever, and it could equally be harmful. Plus, if the pieces are always changing and modifying their logic, isn’t that in itself hindering its reliability? In other words, if the FPGAs are differentiating based on logic, yet modifying and “thinking”, won’t they make mistakes? That is why we use computers … because they are constant, whereas humans are not. If you add the human ability to think to computers, you add the human ability to make mistakes.
The biggest problem I’ve always seen with A.I. is that when you remove the constant nature of computers, you remove what we rely on them for. It doesn’t make sense to have artificial intelligence when you have genuine intelligence.
Overall, the idea of A.I. is something that I’m uneasy about. Seeing what technology is capable of and simply imagining that kind of power out of our hands is something that scares me. Even if it’s used for simple tasks. Maybe the Terminator movies have affected me a little too much.
I’m looking forward to seeing the other responses to this discussion activity. I’m sure we’ll get opinions similar to the opinions that the topic of stem cell research generates. My opinion on this matter is simple; I don’t believe that we should use technology that we don’t understand. It can only lead to disaster in the end.
First of all, if you don’t understand how something works, it is impossible for you to use it to its full potential. For example, would a football coach implement a new offense that he didn’t understand? No, of course not! He wouldn’t be able to teach his players the offense, and surely would not be able to use the offensive scheme to its full potential. The same is true with technology: if you don’t understand it yourself, you can’t teach others about it or use it to its full potential.
Now I come to the most obvious reason that makes me believe implementing technology we don’t understand is dangerous: it can get out of control. It is possible that I’ve watched too many science fiction movies and TV shows, or read too many science fiction books. However, I still believe there is an inherent danger in messing around with forces you don’t understand. Ok, so maybe the machines won’t take over the world and enslave the human race as they did in the Matrix trilogy, but the consequences of machines with minds of their own are very real and very serious. Following down this path of thought brings us to a very ethical debate which I will refuse to engage in for this exercise on the grounds that (to put it simply) I might piss people off. Science ethics is another discussion for another DA, possibly. Anyway, back to the possible danger of meddling with unknown forces. Let’s take the atom bomb as an example. In this day and age we are still discovering side effects associated with it. When the decision was made to use the atom bomb, they understood its power, or at least they thought they did. They didn’t understand its full potential and all the harmful side effects associated with it. Now that we do understand this, we go to great lengths to make sure this technology does not fall into the wrong hands, and this technology is not used anymore. It’s more of a threat than anything.
Without offending too many people, I hope that my position regarding the use of technology we do not understand is clear. Why would we want to waste resources, that is, use something, but not to its full potential? Using technology that we do not understand is not only foolish but dangerous as well. We can’t even dream about the possible “what ifs” … because we simply can’t guess at what our own technologies are capable of doing. For the good of the human race and everything else in this world, I believe it is foolish and dangerous to ignore the inherent danger in attempting to implement technologies that we do not understand.
Implementing a technology that someone does not know and understand, like evolutionary algorithms and FPGAs, is, I believe, risky. How would you understand the results the management system is producing for you, and what would you do if something went wrong? Would you be able to understand the problem and correct it? My guess is that you wouldn’t be able to do anything, because you wouldn’t understand the management system and how it worked. The purpose of a management system is to perform economic transactions, share information, cooperate on joint business ventures, make decisions, and communicate more effectively. In order for it to be useful to its users, its users need to understand the management system completely and thoroughly. Not understanding the technology you are using is what leads to disasters, and it puts a limit on the potential usefulness of the technology.
Technology has to be understood when you use it. Not understanding what you are using can pose possible dangers and be extremely risky. This in turn makes the technology very unreliable, because you have absolutely no idea what is going on, or whether the results being produced are good or not. The outcomes are unpredictable, and I do not know if that is a chance we would want to take.
Overall, I cannot see any advantages in implementing a new technology that no one knows anything about. If no one knows anything about the new technology and something were to go wrong, how would the problem be fixed? It could not be fixed, and the technology would become useless, which is exactly what it was before, since no one understood how it worked or its usefulness. Knowing how technology works is what makes it valuable to people.
It would be unforeseeably dangerous and reckless to implement evolutionary algorithms and FPGAs without, indeed, knowing all the potential advantages and disadvantages of their usage. To this extent I would agree, with many others, that they should not be employed given our current lack of knowledge. It sounds like an extremely advanced discovery, but without a thorough understanding it is potentially harmful, useless, unreliable and, above all, pointless.
For a management system it could prove extremely valuable: cost effectiveness, advancements above and beyond all recognition, and a superior position against all competitors without this technology. It could boost revenues by cutting labor and technology costs, and at the same time create jobs in new fields of study. On the other hand, it could benefit enemies such as terrorists, who could potentially use it against us. For example, they could plant viruses or create defective chips to break down systems or perform other malicious activities.
In my opinion, the risks and disadvantages associated with evolutionary algorithms and FPGAs far exceed the advantages and benefits of using such technologies without first understanding them completely. If any difficulties arose in managing this technology, we would be helpless to create a feasible solution, which in my opinion is extremely terrifying. The more you think about it, the scarier reality becomes. And if there are those who are already implementing such technologies (without first comprehending all aspects of them), then we could be in a world of trouble. And so it has begun: a society clearly, but not entirely, dependent upon computers.
Before reading the article, and after just skimming through the question earlier this week when it was posted, I thought, yeah, why not implement this technology? What do we have to lose? However, a management system is a lot more crucial than I knew. A management system uses the TPS’s databases; thus, several people depend on a management system, requesting summaries, checking inventory, and using the data stored in the database.
Seeing how so many people rely on this for different reasons, I think implementing this technology is a huge risk. However, I do think having a group of people use it and experiment with it, to catch bugs and problems and come up with solutions, is what should happen. Then there would be several advantages to it, seeing how it would promote even better things in the technology world, improvements and advances. We don’t know if this technology is reliable until we try it out. We can’t judge and say it’s not, even though there is a large chance it isn’t, seeing how it’s not understood how it works. But as I see it, learning how it works is just the next step, and backtracking everything to figure out problems and how it was created is what should be done before allowing businesses, people, etc. to rely on this for data output. The potential for success is just the same as the potential for risk. This could be a big success, and it could also be a big downfall and a lot of lost money, but once again we won’t know till we try. But it should be taken in small steps and not fully put out there. Since we don’t know how it works or what exactly will happen, we can’t risk the safety problems and massive failures that may come along with doing so.
I believe in the motto: if you don’t try something new, nothing will ever change. New technology is constantly being created, and we need to utilize it to its potential. It never hurt anyone to try something new and educate ourselves on new innovations. Where would we be as a society if we never tried new things? Evolutionary algorithms and FPGAs are that new technology; we must open our minds up and give them a shot, at least in the experimental stages.
The experimental stage is critical in finding out whether the new technology is beneficial or just a waste of time. We must try to educate ourselves as much as possible in order to integrate the technology into society. I would not fully implement it into reliable systems until we find out as much as possible and make sure it is a stable technology. Reliability is something that will not be gained until we have years of confidence and figure out what evolutionary algorithms and FPGAs can do when operated to their full potential. You never really know what is going to happen with new technology like that until you try it out. It could revolutionize the computing industry, or it could just be a terrible waste of money.
The advantages of the new technology could range across almost anything imaginable. You could create ‘smart computers’ or somehow integrate these new innovations into something that would make our lives easier. Some disadvantages would be a lot of wasted resources such as time and money, but I think that is a small price to pay if something great comes out of it.
There will always be some sort of risk to take, but it depends on the rewards and consequences. I believe in this situation the rewards outweigh the consequences. We must, as a society, keep inventing to further us in the technology age. There is no reason to stop, and the best thing is you never know what is going to be created next. It is exciting to see all this technology and the benefits it can have for us. I hope they pursue the new technology and find some very valuable information.
When asking whether to implement a critical system such as a management system without knowing how it works, I think it’s fairly certain that there will be major problems associated with the technology. When dealing with management systems, it is necessary to weigh the risk vs. the reward, given how much reliance is placed on management information systems in the business world.
I think implementing a working technology such as evolutionary algorithms and FPGAs is a great idea for a prototype, but without knowing the ins and outs of the system, what would happen if something were to malfunction? Who would be responsible for something of this nature? Without people with prior knowledge of such a system, I think it would be far more of a risk than a reward at this stage. Maybe if management used the help of a DSS to make such a crucial decision about the future and such an investment, that would help them choose the best option. Otherwise, I think it’s pretty easy to say that without knowing how something works, no one would be willing to risk such a vital aspect of the company. So in my opinion, most conscientious businesses would stick to the model of: “If it isn’t broken, don’t try to fix it.”
I find it somewhat disturbing that technology whose workings we don’t understand was implemented at all. I mean, I can just imagine buying an airplane ticket, finding my seat, and then the captain telling the passengers, “Don’t worry, I stayed in a Holiday Inn Express last night!”
This toss-up question reminds me of a fundamental question that might have determined how people voted in the past election: Stem-cell research. The possible advantages are astronomical in number, but there are “what ifs”.
In this case, the advantage would be obvious, as a self-evolving computer would kill the “every six months” rule for buying new equipment. Not to mention that companies could save money, as they wouldn’t have to pay people to develop the fastest MIS. The disadvantages, I believe, heavily outweigh those advantages. These are MISs that could determine whether an employee has been producing up to standards or way under par. To me, we can’t put someone’s livelihood on the line without knowing what could happen to that “dependable” MIS whose workings we don’t understand.
Another disadvantage would be in the area of repair. If anything were to happen to the FPGAs, or the algorithms got messed up, can we be sure that the computer would evolve the same way, or at the same pace, as before the malfunction or repair?
As far as reliability goes, I think these would have to be under scrutiny for a period of time in a sort of test phase. Bottom line: we can’t just implement something because it COULD save money and time, not to mention being just plain cool. I feel we should at least know the very basic, very fundamental answer to how something works before it becomes everyday equipment.
After reading the other entries I would have to agree with what was already said. Although I hate to repeat what has been said, I take the same stand.
With that being said, I think this type of technology could be very useful in the future, but as for now, it should not be implemented. However, I do believe that it should be highly tested and implemented on experimental systems so that we might be able to learn how it works, and then learn how to implement the system safely into the technology world.
It is hard to judge whether or not something is reliable without testing it or knowing how it works. Knowing how it works could give insight into predictions of how efficient the system will be, how long it will last, and what problems may occur. So until the system is tested, I don’t think anyone can say for sure how reliable it is.
The advantages of implementing the system are that it will be faster and more efficient, and it could possibly require less maintenance and manpower to manage. Also, depending on how the system is used, it might not have to be updated as frequently, since these systems “learn” from patterns and data from the past.
The disadvantages, as of now, consist of one: we don’t know how it works. That disadvantage alone could lead to many problems, such as: if something goes wrong, how do you fix it? If you know how something works, you know its disadvantages; however, if you don’t know how it works, then you don’t know what to expect in terms of a negative response.
In terms of potential risk and potential success, I think this situation can be looked at much like a new medicine or drug. There are side effects to these drugs, yet they help to cure a problem. Implementing the system would be like taking a new drug that has not been tested; it may solve some problems and be helpful, but we don’t know the side effects, and sometimes they can be extreme.
Without knowing the background of the FPGAs, it would seem very risky to implement them into the managerial structure. While the benefits could outweigh the problems, it would be very risky to attempt such a task. Somewhere down the line, it will seem prehistoric not to have this “artificial intelligence” planted in the company’s scheme. It is possible to use this technology in the current scheme of things, but I feel that a human backup would be needed, at least for a short while, until the company could adjust to the changes. While it would be very beneficial to the company because of its greater efficiency, it could also cause some greater problems. Without people like Adrian Thompson knowing how the technology works, it seems as though problems could not be troubleshot. Therefore, if something goes awry, who do you call to fix it? Of course the chips will try to fix it themselves, but what if that is not the outcome you want? Can you override the chip’s decision making? Suppose the chip analyzes the task it is given, then picks the best code out of the many variations it can produce. When the chip begins applying the code to the task, someone notices that there is a problem. It almost seems impossible to change the computer chip’s decision, since it is mathematically based, which is almost kind of scary. I think that businesses could use this technology sometime in the future, but not before the experts can determine how the whole process works. Once all of the kinks are worked out, it will be amazing to watch the efficiency of even one computer increase dramatically.
The whole idea of having a computer that improves upon itself is something many people have been trying to achieve for years, and now we may have a way to do it. Not actually knowing how it works is just another obstacle to get past before really being able to implement this technology. This technology may be good for businesses in terms of productivity and the speed at which things get done, not to mention the money saved on personnel. But do we actually want computers that can improve themselves, almost eliminating the need for the people who are there for the IT in the business?
The first problem, as mentioned earlier, with this technology is that the people working on it are still not completely sure how it works. This will probably end up being just a small obstacle. What needs to be done is a small-scale test implementation of this technology. Observations need to be taken of the technology at work to completely understand how it works. Using it on a very small scale and recording observations will not only help us understand it better, but also guard against the scenario where we go ahead and implement it in business and hit some sort of problem that we don’t know how to fix. I think a better understanding of the technology will lead to increased reliability.
With the advantages of speed and productivity come some disadvantages. If this technology can improve itself, do you really need people to monitor the system and make sure it’s running correctly? What if the system does something wrong? If it believes it performed the right operation, it’s not going to give any sort of message that something went wrong; it’ll just keep goin’, doin’ the same thing. There will more than likely be people trained to monitor this type of system, though. In the end, the advantages offered to businesses will more than likely outweigh the disadvantages of these systems.
Overall, I think that once this technology is completely understood, the people implementing it will be able to decide whether or not it will work for their individual needs. There will also be people who will learn this technology and how to use and maintain it, so that the risks are taken out of using the system.
The question of whether or not we should implement this technology has a broad range of answers, due to the definition of “implement”. If we say that implementation means full use of the technology, then I would have to disagree with the complete use of the two technologies. But if you say that implementation could be considered a trial, then I would approve of it, because we of course do not know the end results, and a larger scale could cause problems; we really do not know.
So whether this technology is reliable cannot be determined, because once again we do not know; but with what we know now, we can only infer that it will not fail and will be reliable.
The potential success could breach the bounds of the unthinkable. With this in use in a management system, money and, more importantly, time would quickly be saved. But the risk, of course, is the fear of that super artificial intelligence that supersedes all human intelligence and overthrows us; and then of course another risk is mere failure (it just doesn’t work), but it’s doubtful the outcome would be nothing at all.
So I believe that implementing this technology is a good idea on a small controlled scale.
I believe that the technology should be used in a controlled environment. Test the technology and figure out how it works before implementing it into the real world. If we do not know how it works, how will we know how to fix it when something goes wrong? It would be a new step in technology if these new chips were reliable, but there is no way you could put these chips out on the market without knowing how they work.
Without knowing how the new technology works, how can we be sure that it will be controllable? If the chip can make its own decisions, what is there to stop it from ignoring our commands? I know that is not the idea, but it is the direction we are heading.
I think that under some circumstances it might not be such a terrible idea to implement certain technologies, especially when previous sets of technologies weren’t working. However, that being said, I think the nature of this type of system, where it’s almost like a virus in the way it morphs and changes shape to reflect the conditions that suit it best, makes it a dangerous proposition. To install such a system into a critical network would be foolish and premature. There are the obvious pros of it being able to morph itself and adapt to changing hardware and software conditions, but at the same time, without a reliable prototype of the system, there is no telling how it will react to certain situations.
The cons are certainly obvious, and to trust this type of management system using FPGAs in any crucial network is dangerous. For one, the potential risk is TREMENDOUS. If the system were to react adversely to any conditions and there weren’t trained personnel to fix it, what would we do? It’s difficult enough to fix systems whose workings we know, let alone one that has the capability to MORPH itself and that we haven’t a clue how to work. We could lose entire populations of data and operational functionality if one of these systems ever encountered a bug.
There are pros, though, and I think the possibilities are intriguing. First, if there were a type of system that could “heal” itself, it would eliminate the need for many software upgrades, and it would be able to recognize vulnerabilities in its code before even the most skilled IT guy could. So as a cost-saving mechanism, the possibilities are endless. From an efficiency standpoint, it would also be interesting to see just how much easier it would become for organizations to administer changes and shifts in direction with software that had this on-the-dime capability to alter its own code.
Overall, although it’s an opportunity worth exploring, I think that without a working prototype that we can understand, it makes no sense to use a system like this in any network. Just the nature of the product alone leads to a certain amount of uncertainty, let alone not being able to fully understand the ramifications of how it works and what it does. A few more years and FDA-type drug trials would be the wiser choice before an introduction into mainstream markets.
This seems like an innovative idea. The only problem I see is that if there were a flaw, or some problem occurred with the program, how would you fix it if you didn’t know how it worked? I understand that it may look like it can fix itself, but there can always be a flaw in its programming. So I would encourage that this be tested out first, maybe in a company within the boundaries of the college, so that it could still be studied by the college.
If this new technology were to function properly, however, it would seem like a good idea for big companies that need a system to manage and process a lot of data. It would save a company money, but it would also decrease the number of jobs for people. If problems were to occur, however, and this technology produced incorrect information, it could affect the decision-making of the company for the worse. I think that is why someone needs to figure out how it works, so the flaws can be found easily and fixed.
Implementing a system when you don’t understand how it works could prove disastrous. Working technology like evolutionary algorithms and FPGAs sounds like it could revolutionize technology, but I don’t think it should be implemented until we understand how it works. Not understanding the system means that humans might not get its full potential when using it, or might not be able to fix it when something goes wrong. I wouldn’t perform open heart surgery because I don’t know anything about it; it’s the same principle with technology. This technology could possibly one day help us develop better solutions.
There are obvious advantages to implementing this technology now: better solutions, more effective circuits, etc. We could move forward in the way we do things and get them done better. But I feel that the disadvantages outweigh the positives right now. If you don’t know how it works, you don’t know what could go wrong.
OK, I’ll try to take this from both a business point of view and a technical POV.
There are millions of business decisions made every day. It would be incredibly easier to have a machine make them for us, wouldn’t it? Well, this ‘new’ technology does just that. But if I were in the position of CEO, or on a board of directors, and this AI were presented before me, I’d definitely think twice about it. Not because it’s new, but because the theory behind it hasn’t been fully understood. And you have to think: even if we implemented it in an unimportant department of the company, eventually the decisions being made would impact more crucial systems somewhere down the line and WOULD ultimately cause devastating results. Until the decisions made by the AI chip can be predicted 100% of the time, pay the employees the extra little bit to make sure that the data, interpretations, and decisions are correct.
From the technological viewpoint, this could/would be an awesome experience! Imagine having a computer that takes complete control over servers and such when a crash occurs. What about the idea of a computer writing software for humans? You couldn’t go wrong there. Set it up on a test server and watch what happens.
The possibilities for this technology are endless. The potential for automated decision making is high, but its costs are what make the decisions. Not the costs of the technology, but the costs of its repercussions. I personally feel that it should in NO way be used in any significant system.
I don’t believe in using technology that we don’t yet have an understanding of. If we do not know how it works or what it will do, then why would we risk using it?
I feel that there should be more testing and research done before we use this technology in management systems. Even if it is critical for management systems to use new technology, I feel we should use the same things that we have been using, just for a little while longer, while we do more research on these new technologies.
So if a management system is built using evolutionary algorithms and FPGAs, and we aren’t exactly sure how it works, a lot of problems can arise. There are definite advantages and disadvantages, potential for success and potential risk. By looking at the positives and negatives, one could come to a conclusion as to whether or not these systems would be reliable.
Some advantages include having a program that doesn’t need human involvement except to write it. It can potentially solve various problems that have been up in the air for some time. Another advantage is that writing these programs could free up effort for other areas that haven’t been focused on enough, so that even more problems could be solved.
But with every great thing, there are always disadvantages. One is the fact that if we don’t know how it works, how do we know it would actually work? If we implement these systems, will they actually do what they are supposed to? Those are some of the underlying problems the technology could potentially face. This technology may in fact not be reliable.
So if this technology isn’t a clear-cut positive or negative, what can we do? Well, maybe using it on an experimental basis would be a first step. We don’t want to use something that could have a negative effect on everything. As cautious workers, knowing a system like this could potentially do real harm, using it on an experimental basis would be the safe bet. I don’t think it would be that reliable a system if we don’t know very much about it. Maybe finding out more information about it would be the smartest thing to do. I think the negatives definitely outweigh the positives.
I think we should utilize this new technology that makes circuitry more effective on a limited basis, even if the developers do not understand exactly how it works. Even if it were only used on a trial basis, it should be tested in some manner or fashion, in a controlled situation, until it can be analyzed as to how it works. Perhaps initial testing could be on something not critical. How better to test it than to use it? Nothing ventured, nothing gained. I don’t think it would be the first time that something was implemented without the developers knowing the intricacies of how it works.
If the technology were to reveal itself as reliable, it would help companies become more efficient and profitable. On the other hand, if the chip began to alter its circuitry in a manner that was harmful to itself or the data it was managing, that could prove costly. Experts should first test the technology to the best of their ability and debug the system prior to releasing it.
Implementing technology that one knows nothing about, nor knows how to work, can have major effects and result in many problems.
The thought of using evolutionary algorithms and FPGAs with management systems is no doubt a good idea and will probably have major benefits, but if we do not know how they work, then the problems associated with the technology could outweigh the benefits and cause major harm to the management systems that we rely on heavily.
Just like any other type of technology, algorithms have flaws, and at times they malfunction and need to be updated. If we do not know how this system works, then fixing a problem, or even determining whether a problem actually exists, will be almost impossible. Since we will not be able to fix or determine the problems that the system may encounter, it is obvious that it will not be reliable, and the information we put on these management systems could be incorrect, outdated, and questionable.
Management systems consist of software made up of application programs designed to manipulate data in a database. They can also accept requests for information, access data in a database, process that data, produce outputs based on it, and even update the database. So, basically, there is a lot riding on management systems, first of all because they work with database management systems (DBMSs), which we already know are important features in businesses and other IT aspects of our society and others. Management systems make operational decisions for companies; that is a big deal, and if we are not sure how these systems work, even if they make a system more efficient and at times more effective, who knows what else they could do?
Evolutionary algorithms and FPGAs are the working technologies that Adrian Thompson’s systems at the University of Sussex in England are built on. The problem is that although both concepts are understandable technologies on their own, in pursuing computers that can evolve and improve he has put the technologies together, and he does not know how the combination works. But they are working, so should we implement such technologies? If you are a bettor, then why not; they work, just be prepared for the worst. You might have a management system based on these technologies that is supposed to decide how many parts to order, and it might, who knows, order the wrong part, because of something we do not know about in the merger of the two technologies. So maybe we should just keep testing it and experimenting with it, to see what the possible risks are and maybe even discover how they are working together. The obvious advantages are the added efficiency and the carefree attitude one will be able to work with; the computer will be doing all the work. But if you do not know how something works, then it is easily conceivable that it might do something you aren’t prepared for; it might work in some way you did not want it to. Obviously, the technology is out there; I just do not think it is quite ready for us greedy Americans to put our money on it.
Although this management system would definitely be a great asset to us in the future, right now it should obviously not be implemented. Management systems are relied on by all sorts of businesses and organizations. If we are unsure of how the system works, how can we rely on it? Let’s think about this … if the system fails to work at some point, how do you fix it? The answer is, you can’t. If you don’t know how it works, then you can’t fix it.
Adrian Thompson has developed chips that mutate by themselves, reforming their circuit structures over and over again to find the configurations that work best. The thing is, I don’t have the slightest clue what that means. So I have to ask myself … do I really think a chip would be implemented if it hadn’t been thoroughly tested? Who am I to say that a chip like this would be unreliable? If the chip is tested in every situation in which it might be used, then who cares how it works? And I’m going to go back on my initial statement: if you don’t know how something works, it is not definite that you cannot fix it. Obviously, Thompson developed this chip through trial and error; that can be done again if the chip fails to work. Pierre Marchal, who leads research into new computer architectures at the Swiss Centre for Electronics and Microtechnology in Neuchâtel, says of Thompson’s discovery, “You can adapt it, just as the immune system adapts to new diseases.”
So I guess I completely contradicted myself, but when reading some Discussion Activity questions I jump to conclusions based on just the questions and the textbook online. However, for this DA I went further: I looked over some articles and read what professionals had to say about this specific topic. So the first part of my DA this week was just my opinion based on the question and readings; the second was my opinion after I looked further into the problem.
I think it is completely irresponsible to use a system whose workings you cannot understand. Suppose the system crashes — do you really have any way of knowing why, how, or what happened? How do you explain to whoever is affected that you don’t know what to do to correct the problem? I just think that from any standpoint you want to look at this from, it’s a bad idea. The technology is very interesting and will definitely be a great tool once we can understand it, but it just seems dangerous and unnecessary to use these types of things without knowledge of their makeup.
You wouldn’t hire an employee to handle that much important information about your company without knowing a good bit about him or her first. So why should the machines you use be any different?
This technology reminds me of artificial intelligence in the way that it can correct and modify itself to be more efficient. It’s also scary, having seen movies like AI. I think in dealing with anything that has a mind of its own, you really need to have a basic understanding of what makes the thing tick.
Personally, I would never trust anything to one of these systems. Not yet, at least.
In a management system, we should not implement working technology such as evolutionary algorithms and Field Programmable Gate Arrays (FPGAs), since we do not know how they work, and therefore there are potential risks that come with these technologies.
Evolutionary algorithms and FPGAs are not reliable; in the passage, Adrian Thompson mentions that he does not know how the systems work, so we have no real insight into them. There are high potential risks associated with these systems, which could lead to bad consequences in management decisions. For instance, suppose in a business we installed both systems to handle customer orders, taxes paid, inventory levels, and production output levels; with two unreliable systems installed, the business transactions might produce unwanted errors and interrupt the business routine. There is also the possibility of a system breakdown, and the stock exchange, for one, cannot afford breakdowns in business processes. Therefore, the risks in evolutionary algorithms and FPGAs make them both unpredictable and unreliable.
The advantages of these systems are that they can increase the accuracy and efficiency of the management process, since every transaction could be processed through the systems. Also, Mr. Thompson said that the technologies let the circuits work more effectively than other similar circuits. Moreover, these systems encourage paperless transactions, which would decrease the human error that comes with dealing with paperwork and improve accuracy. Besides, these technologies would reduce the human workload, since they can replace some human work processes.
However, there are many disadvantages that come with these two technologies. First of all, they are unknown and unreliable for business processes, since Mr. Thompson has not found out how the system works. We cannot make any forecasts from these systems, which is very undesirable in a business transaction. Also, the risk outweighs the benefits we would receive from the technologies: the risk of destabilizing stock markets and affecting the processes of financial institutions, which hurts normal business management.
All in all, we should not implement these risky and unknown technologies in business. However, with more testing and experiments conducted on this new innovation, we can hope that someday we will witness these evolutionary algorithms and FPGAs improving the business process.
I think this technology sounds phenomenal, and it could potentially be one of the greatest innovations of all time. However, I think one of the worst ideas you can have is to start implementing a product before it has been thoroughly tested and is completely understood. You cannot just throw a product right into the market because it has great potential. This was done with certain diet drugs that had great lab results on animals, but when they were sold to people it was determined that they caused serious health problems. It must be 100% understood how these evolutionary algorithms and FPGAs work before they are put to work, especially if they were to be put into a management system to make strategic decisions or anything more than basic operational or day-to-day decisions.
I also have to add that I believe technology is very reliable – if it has been through rigorous testing and been reengineered over a number of years. For example, almost every accident involving a plane is due to human error. The computer systems on planes almost never fail, and if they do, it is usually the physical aspect that fails, not the logical aspect. I would much rather be in a large jet airliner completely controlled by the computer (which most are today) than in a private plane completely controlled by the pilot. However, if the management system were brand new and the airlines didn’t know how it worked, I would not set foot on the plane.
After much testing, I believe this “learning” technology should be implemented in situations where it will be completing operational tasks of less vital importance. This would be kind of a trial period for the technology, and if it passed this stage, then it could be implemented in management systems with more complicated tasks on a more strategic level.
There have been many movies and books that feature computer software that evolves and thinks on its own, such as Terminator 3, 2001: A Space Odyssey, etc. In these films, the computer in question decides to go rogue and kill humans. Do I foresee that happening with untested evolutionary systems? No.
But I still do not think it is wise to implement them on critical systems. They should first be fully tested and explored on closed test systems, possibly backup copies of current important systems. That way, we can be sure that the technology is sound and reliable. If it does prove to work as well as thought, then it could decrease data processing times, which is a benefit. If it does not work properly, then the evolutionary algorithms and FPGAs might start choosing inefficient variations of the code and thus increase calculation times.
By testing the technology properly, I feel it has the potential to decrease calculation times and data processing times. Even a tiny decrease in the time needed to perform calculations can make a huge difference on large-scale calculations.
This has been a topic of much debate over the past quarter century. People have speculated that we’re going to be overrun by computer-integrated machines, and tons of movies have been made because this is such an interesting topic. The Matrix and even 2001: A Space Odyssey have shown a world where humans are under the control of robots. Artificial intelligence is a means to extend the human psyche. It’s faster, more efficient, and an essential tool as humans move into the twenty-first century. But should we use this? In the movie WarGames, a computer system is developed as a sort of “homeland security” during the Cold War, but its effects showed that if we allow computers to think for us, the consequences can be devastating.
Yet are the benefits worth more than the costs? History shows that we’ve used things in the past without understanding how they worked. We knew seeds grew into flowers with water and sunlight, but never knew why. Did they have devastating effects? Of course not; it’s how humans implement things that is dangerous. A machine or AI is the result of a human creation, and how we intend to use it is the problem. The saying “Guns aren’t the problem; it’s who’s holding the gun that’s the problem” correlates exactly to this issue. If we’re going to design a system that will specialize in warfare, of course something is bound to happen. But with efficiency for business purposes, or even for advancement in the medical fields, it’s difficult to see the negative effects.
Reliability is always an issue, even with technology we know and understand today. If we’re talking about reliability as in “does it conform and do what we expect?”, then of course we should implement this. What could the consequences be? Life lost because we’re experimenting with a system? Every day we take pills that don’t necessarily work. We use chemo as a method of therapy for cancer; chemo is thousands of chemicals thrown together that won’t necessarily save you, and in fact could kill you as well. But once again, any new invention has a trial period because we don’t know the results. AI, or these two new technologies, shouldn’t be treated any differently.
This type of technology will turn out to be very useful sometime in the future. Once we are able to run tests on this technology and gather enough information about how it works, we will be able to implement it successfully. However, as for now, we do not have enough information about the systems to implement this technology.
We cannot be sure how reliable the system is without running tests and gathering information about these technologies and how they will interact with our systems. As for now, the technology cannot be marked as reliable. But we also cannot mark it as unreliable, because we have not run tests on it yet. So, once we know how it works, we will have enough information to make an informed decision.
This technology could prove very useful for the future. If we successfully implement it into our systems, it will provide faster and more efficient technology. Another major benefit would be that we would not have to update our systems as frequently, and it may allow smart technology to learn and create more efficient systems than man-made ones.
The major disadvantage at the moment is the fact that we do not know how the technology works. If we allow many tests to be run, so we can learn more about these technologies and gain insight into how they work, then this disadvantage will no longer exist. After we have taken these steps, we may run into other problems that are at the moment unforeseen, but as for now we only have one main disadvantage to implementing this technology.
There is a huge risk to be taken if we attempt to implement these systems at the current time. We may invest time and effort into making these systems work, and then they may not meet the needs of our users. On the other hand, if we implement these systems now, they may work successfully, which would save much time and effort, and from there we would be able to gather information as to how they work as we went along.
I believe that evolutionary algorithms and FPGAs should definitely be researched and implemented. This is a breakthrough technology that could be very helpful. We designed computers to help us solve problems, and this circuitry programming is just another problem they can help solve. If computers can do it better than us, we might as well just have them do it. There are, however, some problems with this. If we don’t even understand how it works, then if there were ever any problems, we would be at the mercy of our computers. We would first have to learn how to understand it, and only then could we begin to try to fix it. We would be stuck. I think that with the implementation of these programs there should also be an automatic backup function that returns the state to a known configuration made by humans. This would help prevent some of the possible problems that could be encountered.
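(A minimal sketch of the kind of automatic backup function this poster proposes, assuming, purely for illustration, that a configuration can be run against known test cases. Every name and the validation scheme here are hypothetical, not part of Thompson’s work.)

```python
# Hypothetical safety net: validate an evolved configuration against known
# test cases, and fall back to a human-designed configuration on any failure.

KNOWN_GOOD_CONFIG = {"mode": "human_designed"}  # assumed known state

def validate(config, run, test_cases):
    """True only if the configuration reproduces every expected output."""
    return all(run(config, inputs) == expected for inputs, expected in test_cases)

def deploy(evolved_config, run, test_cases):
    if validate(evolved_config, run, test_cases):
        return evolved_config      # evolved version passed every check
    return KNOWN_GOOD_CONFIG       # automatic rollback to the known state

# Toy "run" function and test cases, purely for illustration.
def run(config, x):
    return x * 2 if config.get("mode") != "broken" else x

tests = [(1, 2), (3, 6)]
print(deploy({"mode": "evolved"}, run, tests))  # passes -> evolved config
print(deploy({"mode": "broken"}, run, tests))   # fails  -> KNOWN_GOOD_CONFIG
```

The design choice is simply that you never have to explain why the evolved configuration failed; you only have to detect that it did, and the human-made state is always available.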
Over the past few decades, technology has evolved at an astonishing rate. Researchers have been striving to improve existing systems and invent new technologies that can make our lives easier and more productive. In this age of automation, it comes as no surprise that we are now trying to create systems that can evolve and improve themselves without a human component. To some, this latest jump in technological advancement may seem worrisome, as perhaps it should, but it seems to me that the use of evolutionary algorithms and FPGAs presents an undeniable opportunity to learn more about and improve technology, and management systems in particular.
I would not go so far as to say we should implement this technology in its present state, but with some carefully monitored testing, I believe it could be developed to a point at which it would be both reliable and beneficial. Implementing this technology in its present state, however, would be premature and would result in the creation of a system that we can’t even fully comprehend, let alone realize the potential of and use properly. Furthermore, it would be difficult to troubleshoot a system involving these technologies if an error occurred on a logical level. I expect that it would be possible to repair any hardware damage, since the physical make-up of the system would be known, but if an error occurred in the manipulation process that we don’t fully understand, fixing it would be much more difficult and costly. This unpredictability makes the technology in its current state unsuited to the fast-paced and demanding business world.
Implementing a setup that allows researchers to monitor and record the self-modifications that the evolutionary algorithms and FPGAs make, along with the conditions surrounding these changes, and to analyze them later seems to be the best course of action in determining whether these technologies can be honed to usable levels in the future. If, down the road, these modifications can be mapped and appropriately restricted, this technology could be an amazing boon to information and business systems. A system that can adapt itself and reallocate resources to perform certain tasks with maximum efficiency would save time and effort on the part of administrators, which translates into money.
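(One way to picture the monitoring described above: a thin wrapper that appends every self-modification to a log, together with a timestamp and the surrounding conditions, for later offline analysis. Everything here, the class name, the log format, and the idea that changes arrive as before/after configurations, is an assumption made for illustration.)

```python
import json
import time

class ModificationRecorder:
    """Append-only log of self-modifications for later analysis (hypothetical)."""

    def __init__(self, path="modifications.log"):
        self.path = path

    def record(self, old_config, new_config, conditions):
        entry = {
            "timestamp": time.time(),
            "old": old_config,           # configuration before the change
            "new": new_config,           # configuration after the change
            "conditions": conditions,    # e.g. temperature, load, inputs seen
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Illustrative use: log one evolutionary step and its conditions.
recorder = ModificationRecorder()
recorder.record([0, 1, 1], [0, 1, 0], {"load": "low", "task": "tone discrimination"})
```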
Technology can only advance as far as we allow it. If we are always fearful and afraid to take risks, monetary or otherwise, our gains will be modest. I found the parallel mentioned in previous posts between this topic and stem cell research particularly relevant in this aspect. Overall, these algorithms and FPGAs present a fascinating opportunity for advancement, and I think this should be explored.
I really do not think that implementing this technology would be very smart. Why would we put something on the market if we do not understand it ourselves? That is a scary thing, and what happens when we do stuff like that is the robots take over and destroy us all. I mean, that’s just a weird sci-fi kind of thing, but to install this system into any kind of network would be crazy. The pros are that it can morph itself and adapt to changing hardware and software conditions, but at the same time, without a reliable prototype of the system, there is no telling how it will react to certain situations.
The cons are certainly obvious, and to trust this type of management system using FPGAs in any network is dangerous. The risk is very big, and dangerous enough to destroy a network. If the system were to react badly to anything, what could we really do? We would have no personnel to fix the problem, and we would be stuck with a corrupt network. It’s difficult enough to fix systems when we know how they work, but when we don’t know how they work, it’s not possible to understand the problem. We could lose tons of data by implementing a system that does not work into a network.
The pros, I admit, are amazing. There are so many different possibilities if this system could morph to fix itself. This could get rid of all that downloading of new software every time another virus comes along. This could really help personal computer users, because we are always stuck downloading more and more stuff and software updates to stay in touch with technology. This could also save money, because if a system can fix itself, then what do we need repair guys for?
Although I think this morphing system is a great idea, we should definitely figure out how it works before we put it into any critical network. What would we do if this network shut down? We could never figure out the problem, and the bugs could destroy the network.
I believe that we should not implement the evolutionary algorithms and FPGAs into any management system, big or small, for the simple reason that we don’t know how they work. Until we perfect A.I. (artificial intelligence), computers are still just obeying algorithms, and no matter how complex the algorithm is, there can always be an exception that can crash the system. In a management system, this can lead to disaster. But there are positive things that can come from using automated algorithms: improved communications, fewer man hours, and we don’t know the limitations. And not knowing the limitations can prove to be very dangerous or very effective, because it can lead to things we never thought of. It seems to me that evolutionary algorithms could only be used safely for structured decision making, because the decisions are expected. If a decision is not expected, there is a chance that it is an exception. So, in case of that exception, we need to know how the systems work before implementing them into any real-world management system.
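(The point about structured decisions being “expected” can be made concrete: because a structured decision has a known range of acceptable outputs, an evolved system’s decision can be bounds-checked before it is applied, and anything outside the range can be escalated to a person. A hedged sketch; the reorder-quantity example and all the numbers are invented for illustration.)

```python
# Hypothetical guard for a structured decision (a reorder quantity).
MIN_ORDER, MAX_ORDER = 0, 500   # assumed acceptable range for this decision

def apply_decision(quantity):
    if not (MIN_ORDER <= quantity <= MAX_ORDER):
        # Unexpected output: escalate to a human instead of letting the
        # exception crash the management system.
        raise ValueError(f"reorder quantity {quantity} is outside the "
                         f"expected range; escalating to a human operator")
    print(f"ordering {quantity} units")

apply_decision(120)      # an expected decision, applied automatically
# apply_decision(9999)   # an unexpected one would be escalated, not applied
```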
As the readings said, management systems are a critical part of the world of technology today. Without them, we would not be able to manage all the different systems and data that we constantly use. The concept of using so-called artificial intelligence in management systems could be a breakthrough in the way these systems are run, but it could also cause major problems in them. Why could there possibly be problems? It is because we just do not know enough about this technology yet.
I think that using these evolutionary algorithms and Field Programmable Gate Arrays (FPGAs) would be very beneficial if they were implemented in management systems. However, I do not think this technology should be implemented into everyday systems right away. The systems need to be tested more, and more research needs to be done, before implementing this technology. I mean, yes, the technology would be great and great things could evolve from it, but there could also be negative outcomes as a result of implementing it. I believe this new technology should be implemented in an almost trial-and-error method, starting in smaller-level businesses or even in mock business situations. If it showed positive results on a smaller level, then it should be gradually implemented into the higher-level corporations.
It seems that this technology could in fact be reliable if implemented into management systems. However, there is still that question of ‘what if?’ in my mind. I just don’t think we know enough about this technology, and most importantly about how it works, to say whether or not it is reliable. It could very well be reliable, but looking back at history, there were times when people thought things were reliable and they turned out not to be. Take the Titanic for example: somewhat of a stretch, but an example of a reliability failure nonetheless. People thought it was unsinkable, but guess what, it wasn’t! Until we know more about how this whole process of the algorithms and the FPGAs works, we should consider it a big risk and start very slowly.
I think this system could be very advantageous if implemented into management systems. It seems that it would be able to do more, faster, cheaper, and with fewer people. A system out there that can think for itself and choose the best possible decision without the help of a human; just reading that shows the advantage. A company with this kind of management system would need fewer employees to sit there and make the decisions. However, what if this system were to fail after being implemented? The organization could lose a lot of valuable information, and this type of failure could consume the company. With fewer employees at the company, if a failure occurred, I do not think the company could recover from a major incident.
Since computers evolved, humans seem to have been pushed into more of a backup role. Computers today can do just about anything we want and tell them to do. They can perform calculations and store data for us, but if our system crashes, we still have the ability to perform these tasks. With artificial intelligence and computers making decisions for themselves, the role of the human almost seems obsolete.
Does this idea scare anyone else? Maybe it’s the fact that I’ve seen too many Twilight Zone episodes, but AI frightens me. Maybe I just have a little bit of Hillsboro (reference: Inherit the Wind) in me, but if we don’t know how something like this works, it should not be implemented on a large level.
I realize I should not be afraid of technology (especially considering that I’m an IST major), and AI on a widespread level could be extremely beneficial to managerial programs, but does Mr. Thompson know how the AI will react in different settings?
It would be beneficial in that if there were some error, the AI would automatically fix it, rather than some worker going through the program and sorting the problem out. But would the AI be intuitive enough to understand something as abstract as sales forecasting? And on that note, would the AI have abstract thoughts? Furthermore, would it be able to process those thoughts and make abstract conclusions from them?
But I guess, yes, on a small experimental scale we should have AI; I just really don’t want to see the Governator kicking down my door.