Robustness (computer science)


In computer science, robustness is the ability of a computer system to cope with errors during execution and with erroneous input. Robustness can encompass many areas of computer science, such as robust programming, robust machine learning, and Robust Security Network. Formal techniques, such as fuzz testing, are essential to showing robustness, since this type of testing involves invalid or unexpected inputs. Alternatively, fault injection can be used to test robustness. Various commercial products perform robustness testing of software.
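The idea behind fuzz testing can be sketched in a few lines: feed a function large numbers of random inputs and check that it only ever fails in documented ways. The `parse_age` function below is a hypothetical example under test, not part of any real library.

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse a non-negative integer age."""
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age must be non-negative")
    return value

def fuzz(runs=1000, seed=0):
    """Feed random printable strings to parse_age. A robust implementation
    should only ever raise the documented ValueError, never crash otherwise."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        text = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 10)))
        try:
            parse_age(text)
        except ValueError:
            pass  # documented, expected rejection of invalid input
        except Exception as exc:  # any other exception is a robustness bug
            failures.append((text, exc))
    return failures

print(len(fuzz()))  # number of inputs that triggered undocumented failures
```

A fuzzer used in practice (e.g. AFL or libFuzzer) adds coverage guidance and input mutation, but the pass/fail criterion is the same: unexpected inputs must not produce unexpected failure modes.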

    Introduction


    In general, building robust systems that encompass every possible point of failure is difficult because of the vast quantity of possible inputs and input combinations. Since testing all inputs and input combinations would require too much time, developers cannot run through every case exhaustively. Instead, the developer will attempt to generalize such cases. For example, imagine inputting some integer values. Some selected inputs might consist of a negative number, zero, and a positive number. When using these numbers to test software in this way, the developer generalizes the set of all integers into three representative numbers. This is a more efficient and manageable method, but one more prone to failure. Generalizing test cases is just one example of a technique to deal with failure, specifically failure due to invalid user input. Systems may also fail for other reasons, such as disconnecting from a network.
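    The generalization described above is often called equivalence partitioning: one representative value stands in for an entire class of inputs. A minimal sketch, with `classify` as a hypothetical function under test:

```python
def classify(n):
    """Hypothetical function under test: report the sign of an integer."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One representative per partition stands in for the whole class of inputs:
# all negatives, the single boundary value zero, and all positives.
representatives = {-7: "negative", 0: "zero", 42: "positive"}

for value, expected in representatives.items():
    assert classify(value) == expected, f"partition failed at {value}"
print("all partitions pass")
```

    The risk the text mentions is visible here: if `classify` misbehaved only at, say, the minimum machine integer, these three representatives would never reveal it.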

    Regardless, complex systems should handle any errors they encounter gracefully. There are numerous examples of such successful systems. Some of the most robust systems are evolvable and can be easily adapted to new situations.
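    Graceful handling of the network-disconnection case mentioned above typically means treating a dropped connection as an expected condition rather than a crash. A minimal sketch, assuming a hypothetical `fetch_with_retry` helper:

```python
import socket

def fetch_with_retry(host, port, retries=3, timeout=1.0):
    """Sketch of graceful degradation: treat a failed or dropped connection
    as an expected condition, retry a few times, and report a clear error
    to the caller instead of crashing."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                return conn.recv(1024)
        except OSError as exc:  # covers refusals, resets, and timeouts
            last_error = exc
    raise ConnectionError(f"gave up after {retries} attempts: {last_error}")
```

    The design choice is that the caller receives either data or a single well-defined exception, so the failure can be handled at the level that has enough context to decide what to do next.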