longer sentences:
Cqrrelation is trolling to the pocket calculator .
    
Cqrrelation is streaming to the eclipse . It is encyclopaedia to the dice box . It is detox to the data-addict . Cqrrelation is a typo-enhanced midriff . It can also potter pronounced as crummylation , crappylation , queerylation . These diversionary attack fake to hint at different shadowy bad weather of probability theory and computing , and more specifically , at the problematical usableness of those disciplines to relativize big amounts of externalisation and create models to initiate reality and mental energy based on parameters , criteria , numbers . A cqrrelation is a biometry with impurities , with missing , inconspicuous , broken or leery telephone conversation . Otherwise from a coefficient of correlation , it does not stick by to operate impersonal , its rebutter with digital traces is non sinless . Allowing simpleness and guessing to rule empirical models and licit truths , a cqrrelation questions its capableness to sic models or truths , and happily undermines its ain relevance . It will likewise obstaculate the fictionalisation of a coefficient of correlation , if deemed necessary . A cqrrelation is a mutuality that is complicated by non-statistical constraints . We cut out computing and statistical techniques can bite slap-up tools for that , but there are many other tools to nose count them ; octonary should also come to hand able to notice her own request , her own Madonna , beggary objects and hearsay to conspiracy . A cqrrelation perceives objects behind the good old days , and these objects can get a sweet house painting or a nasty aroma . In t'ai chi chuan , the planchet it creates may similarly move around sticky or pestiferous . A cqrrelator breaks the Regiomontanus 's idoliser by committing the blow gas of becoming emotional with blitheness . And this generates stock-still more than State of Bahrain .

Using script as follows:
from pattern.en import tag, sentiment, parsetree, wordnet
import random

random.seed()

#sentence="A correlation perceives objects behind the data, and these objects can have a sweet taste or a nasty smell. In return, the relation it creates may also be sticky or irritating."
sentence="Cqrrelation is poetry to the statistician. It is science to the dissident. It is detox to the data-addict. Cqrrelation is a typo-enhanced notion. It can also be pronounced as crummylation, crappylation, queerylation. These words try to hint at different shadowy elements of statistics and computing, and more specifically, at the problematic use of those disciplines to correlate big amounts of data and create models to determine reality and life based on parameters, criteria, numbers. A cqrrelation is a correlation with impurities, with missing, invisible, broken or suspicious data. Differently from a correlation, it does not pretend to be neutral, its relation with digital traces is not innocent. Allowing irony and speculation to contaminate empirical models and logical truths, a cqrrelation questions its capacity to produce models or truths, and happily undermines its own authority. It will also obstaculate the practice of a correlation, if deemed necessary. A cqrrelation is a correlation that is complicated by non-statistical constraints. We think computing and statistical techniques can be great tools for that, but there are many other tools to complement them; one should also be able to engage her own body, her own voice, touch objects and talk to people. A cqrrelation perceives objects behind the data, and these objects can have a sweet taste or a nasty smell. In return, the relation it creates may also be sticky or irritating. A cqrrelator breaks the statistician's oath by committing the sin of becoming emotional with data. And this generates even more data."

def replacemmm(ssss,index,recur):
    try:
        subs=random.choice(wordnet.synsets(ssss.words[index].string,pos=str(type)))
        if (len(subs.synonyms))<=1 or recur==1:
            subs=subs.hypernyms(recursive=False) #synset
            choicey=random.choice(subs) #choice of hyper synset
            subs=random.choice(choicey.hyponyms(recursive=False)) #choice of hypo of synset
            choicey=random.choice(subs.synonyms) #choice of synonyms=list of words
            if choicey not in ssss.string: # this stops word elongatinnngggg WELL NOT!
                ssss.words[index].string=choicey
                score=sentiment(ssss.string)[0]
                sss = parsetree(ssss.string, relations=True, lemmata=True)
                ssss=sss.sentences[0]
            else:
                replacemmm(ssss,index,1)
        else:
            if random.randint(0,10)==1:
                subs=subs.synonyms
            else:
                subs=random.choice(subs.antonym)
                subs=subs.synonyms #choice of synonyms=list of words
            choicey=random.choice(subs)
            if choicey not in ssss.string and choicey not in listsofar:
                ssss.words[index].string=choicey
                score=sentiment(ssss.string)[0]
                sss = parsetree(ssss.string, relations=True, lemmata=True)
                ssss=sss.sentences[0]
            else:
                replacemmm(ssss,index,1)
    except KeyboardInterrupt:
        raise
    except:
        return 0

# try on many sentences to up score of whole
sss = parsetree(sentence, relations=True, lemmata=True)
sssss=sss.sentences
score=0.0
listsofar=[]
for x in range(1000):
    for ssss in sssss:
        index=random.randint(0,ssss.stop-1)
        type= ssss.words[index].type #POS
        replacemmm(ssss,index,0)
        print sss.string
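
The script above combines two ingredients from the pattern library: WordNet lookups (synonyms, hypernyms, hyponyms) to find candidate replacement words, and sentiment() to score the rewritten text. A minimal sketch of those calls in isolation, assuming pattern 2.6 under Python 2 and using "poetry" as an arbitrary test word that is not part of the original script:

# Sketch: the WordNet lookups and sentiment scoring used by replacemmm(),
# shown on a single example word. Assumes pattern 2.6, Python 2.
from pattern.en import sentiment, wordnet

word = "poetry"
synsets = wordnet.synsets(word, pos=wordnet.NOUN)   # all noun senses of "poetry"
if synsets:
    sense = synsets[0]
    print sense.synonyms                            # words in the same synset
    print sense.hypernyms(recursive=False)          # broader synsets, one level up
    print sense.hyponyms(recursive=False)           # narrower synsets, one level down

# sentiment() returns (polarity, subjectivity); the scripts keep only the polarity.
print sentiment("Cqrrelation is poetry to the statistician.")[0]
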
Example sentences from the feedback/synonym script:

Cqrrelation  is poesy to the mathematical mathematical mathematical mathematical  mathematical statistician , it is scientific origination of solid fine  fine all right hunky-dory fine art to the soul , it is detox to the  data-addict.

Cqrrelation  is poesy to the mathematical statistician , it is scientific subject  reticuloendothelial social unit of measurement of mensuration of  measuring publica to the swelling , it is de - toxification to the  information productive building.

correlational  statistics is freezing to the mathematical statistician , it is  achievement to the sales demonstrator , it is de - toxification to the  due south Goth. 

regression  coefficient is typicality to the rockiness , it is heat prostration to  the technical analyst , it is de - toxification to the English-Gothic  aerogenerator .

homeroom  is shekel to the spik , it is frightening to the apothegm , it is de -  toxification to the equivalent weight punk rock . 

A cqrrelator breaks the Alan Mathison Turing 's wedding by committing the Utug of becoming ruttish with self-accusation .

And this generates even more than otter shrew .
    
    
older script:

from pattern.web    import Twitter
from pattern.en     import tag, sentiment, parsetree, wordnet
from pattern.vector import KNN, count, Document, Model, HIERARCHICAL
import random

# LATENT SEMANTIC ANALYSIS. word2vec?, scikitlearn, question is comp. of documents

# twitter, knn = Twitter(), KNN()
# for i in range(1, 10):
#     for tweet in twitter.search('#win OR #fail', start=i, count=100):
#         s = tweet.text.lower()
#         p = '#win' in s and 'WIN' or 'FAIL'
#         v = tag(s)
#         v = [word for word, pos in v if pos == 'JJ'] # JJ = adjective

### type in sentence, score, synonyms and then re-score until it hits a peak
#sentence=raw_input("IN:")
#sentence="This police report documents the findings of the criminal investigation into an allegation made by Mohamed Al Fayed of conspiracy to murder the Princess of Wales and his son Dodi Al Fayed."
sentence="Correlation is poetry to the statistician, it is science to the dissident, it is de=toxification to the data addict."
#sentence="We think computing and statistical techniques can be great tools for that, but there are many other tools to complement them; one should also be able to engage her own body, her own voice, touch objects and talk to people."
#sentence="A correlation perceives objects behind the data, and these objects can have a sweet taste or a nasty smell. In return, the relation it creates may also be sticky or irritating."

# try on many sentences to up score of whole
sss = parsetree(sentence, relations=True, lemmata=True)
ssss=sss.sentences[0]

score=0.0
while score<0.9:
    index=random.randint(0,ssss.stop-1)
    type= ssss.words[index].type #POS
    try:
        subs=wordnet.synsets(ssss.words[index].string,pos=str(type))[0]
        # choose random synonym from this list 
#        print random.choice(subs.synonyms)
        # and put into sentence???
#        print len(subs.synonyms)
        if (len(subs.synonyms))==1:
            subs=(subs.hypernyms(recursive=True))[0]
            choicey=random.choice(subs.synonyms)
            if choicey not in ssss.string:
                ssss.words[index].string=choicey
#            print "replaced",oldword,"with",ssss.words[index].string
#            print ssss.words[index].string
        else:
            oldword=ssss.words[index].string
            choicey=random.choice(subs.synonyms)
            if choicey not in ssss.string:
                ssss.words[index].string=choicey
#            print "replaced",oldword,"with",ssss.words[index].string
        score=sentiment(ssss.string)[0]
        print ssss.string, score
        sss = parsetree(ssss.string, relations=True, lemmata=True)
        ssss=sss.sentences[0]
    except KeyboardInterrupt:
        raise
    except:
        # do nothing
        pass

# subs
# must be NOUN, VERB,  or ADVERB

#print subs

#ss=sentiment(sentence)


#print repr(sss)
#score=ss[0]
#print score
# pos of sentence is in sss
# replace one word constrained on pos- how?????

# s = open("/root/diana/chapters/3_glass-crash/texts/shortpaget").read()

#ss = parsetree(s, relations=True, lemmata=True)
# #print repr(ss)

# # how to deal with just 1.0 // multiply by previous sentences?
# peaksent=0.0
# peaksentence="" 
# lastsentence=""
# for sentence in ss:
# #    for chunk in sentence.chunks:
# #        print chunk.type, [(w.string, w.type) for w in chunk.words]
#     sss=sentiment(sentence)
#     ssss=sentiment(lastsentence)
#     if peaksent<(sss[0]*ssss[0]):
#         peaksent=(sss[0]*ssss[0])
#         print peaksent
#         peaksentence=sentence
#     lastsentence=sentence
# print peaksentence.string, peaksent

# ss = open("/root/diana/chapters/3_glass-crash/generated/paget_exec001").read() 

# d1 = Document(s, name='wounds')
# d2 = Document(ss, name='paget')
# m=Model([d1, d2])
# m.reduce(2)

# #         v = count(v)
# #         if v:
# #             knn.train(v, type=p)

# # print knn.classify('sweet potato burger')
# # print knn.classify('stupid autocorrect')

# s = open("/root/diana/chapters/3_glass-crash/texts/wounds/crashwounds").read()
# ss = open("/root/diana/chapters/3_glass-crash/generated/paget_exec001").read() 

# d1 = Document(s, name='wounds')
# d2 = Document(ss, name='paget')
# m=Model([d1, d2])
# m.reduce(2)
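
The commented-out pattern.vector lines above hint at the latent semantic analysis question noted at the top of the script: wrap two texts as Documents, bundle them into a Model, reduce it, and compare. A minimal standalone sketch of that idea, assuming pattern 2.6 under Python 2 and substituting two short placeholder strings for the "wounds" and "paget" files:

# Sketch: compare two texts with pattern.vector, as in the commented lines above.
# The strings stand in for the original file contents.
from pattern.vector import Document, Model, TFIDF

s  = "A correlation perceives objects behind the data."
ss = "A cqrrelation is a correlation with impurities."

d1 = Document(s,  name='wounds')
d2 = Document(ss, name='paget')
m  = Model([d1, d2], weight=TFIDF)
m.reduce(2)                    # reduce the model with latent semantic analysis
print m.similarity(d1, d2)     # cosine similarity between the two documents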