Group: http://groups.google.com/group/twitter-development-talk/topics
- Additional settings for the Profile Widget? [1 Update]
- how to parse <google:location> [1 Update]
- Must specify some followings to use sitestreams [1 Update]
- followers/ids, cursoring vs. paging, and Charlie Sheen [1 Update]
- 401 to site streams since yesterday [1 Update]
 
- RogersBlant <rogerstenning@gmail.com> Jul 22 01:52AM -0700
 
Morning, all - first post on here!
This is possibly a non-starter, but let's see...
In addition to my Twitter account, I also run a Blogspot blog, at
http://rogersblant.blogspot.com/ . I use the Profile Widget to display
my tweets on my blog. The colours customisation is fine, as is the
addition of the scrollbar and the automatic resizing of the widget.
However, the problem I have is that neither the font nor the font
size is in keeping with the rest of my blog. I have tried using CSS
to change both, but with no success.
Given that the script supplied by Twitter applies CSS of its own, I
find that surprising; I am therefore wondering whether the widget can
be modified locally by me using CSS, or whether Twitter could add
more settings to allow different font families and sizes to be used.
If anyone knows how this might be achieved locally, then I'm all
ears; if not, then I suspect it'll be down to Twitter, as and when
(if) they decide that it's a feature worth adding!
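For reference, the sort of override I've been trying looks like the
following (the class names are guesses from poking at the rendered
markup, so they may well be wrong - which could itself be the
problem):

/* Guessed selectors: inspect the widget's rendered markup for the
   real class names before relying on these. */
.twtr-widget, .twtr-widget .twtr-tweet-text {
  font-family: Georgia, serif !important; /* !important to outrank the widget's own rules */
  font-size: 12px !important;
}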
Cheers for any help,
Roger
- defthym <dimefthim@gmail.com> Jul 22 03:28AM -0700
 
Hi,
I'm trying to parse the google:location element from this URL:
http://search.twitter.com/search.atom?q=morning&rpp=100&geocode=48.853873,2.340088,2500km&page=1
I am using the "XML" package in the R language (though the solution
should be similar in other languages), and my code (which parses
"title" rather than "google:location") is:
twitter_url <- paste('http://search.twitter.com/search.atom?',
'q=morning&rpp=100&geocode=48.853873,2.340088,2500km&page=1', sep='')
mydata.xml <- xmlParseDoc(twitter_url, asText=FALSE)
mydata.vector <- xpathSApply(mydata.xml, '//s:entry/s:title', xmlValue,
namespaces = c('s' = 'http://www.w3.org/2005/Atom'))
# accumulate across pages (assumes mydata.vectors was initialised earlier)
mydata.vectors <- c(mydata.vector, mydata.vectors)
When I replace "title" with "google:location" I get an error,
probably because of the Google Base namespace
(http://base.google.com/ns/1.0) no longer being available (but I am
not absolutely sure about that).
How can I parse the location? Any suggestions?
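My best guess is that the google: prefix simply needs to be declared
to the parser as well; an untested sketch, assuming the feed still
declares the Google Base namespace URI above:

mydata.locations <- xpathSApply(mydata.xml, '//s:entry/g:location',
xmlValue,
namespaces = c('s' = 'http://www.w3.org/2005/Atom',
'g' = 'http://base.google.com/ns/1.0'))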
Thanks,
Dimitris
- CT <crowdtwist@gmail.com> Jul 21 01:01PM -0700
 
Hi.
I've been using the User Streams API for quite some time via the
Phirehose patches. We are now trying to get Site Streams working and
have made the necessary modifications to the User Streams code.
Unfortunately, I always receive this error when I try to connect and
read data from the stream:
HTTP ERROR 400: Bad Request (45Must specify some followings to use
sitestreams, e.g. follow=1,2,3,4)
The following is the output showing how I'm trying to connect and
what I'm sending:
+ Connecting to twitter stream: https://sitestream.twitter.com/2b/site.json
with params: array ( 'delimited' => 'length', 'replies' => 'all',
'with' => 'user', 'follow' => '17448575',)
+ Resolved host sitestream.twitter.com to 199.59.148.137
+ Connecting to 199.59.148.137
+ Connection established to 199.59.148.137
Array
(
[oauth_consumer_key] => [Surpressed]
[oauth_nonce] => 17c00ae9913dd68d5d9c493c1474807c
[oauth_signature_method] => HMAC-SHA1
[oauth_timestamp] => 1311277876
[oauth_token] => [Surpressed]
[oauth_version] => 1.0
)
string(307) "POST&https%3A%2F%2Fsitestream.twitter.com%2F2b
%2Fsite.json&oauth_consumer_key%3D[Surpressed]%26oauth_nonce
%3D17c00ae9913dd68d5d9c493c1474807c%26oauth_signature_method%3DHMAC-
SHA1%26oauth_timestamp%3D1311277876%26oauth_token%3D[Surpressed]
%26oauth_version%3D1.0"
+ POST /2b/site.json HTTP/1.1
+ Host: sitestream.twitter.com:443
+ Authorization: OAuth realm="https://sitestream.twitter.com/2b/
site.json", oauth_consumer_key="[Surpressed]",
oauth_token="[Surpressed]",
oauth_nonce="17c00ae9913dd68d5d9c493c1474807c",
oauth_timestamp="1311277876", oauth_signature_method="HMAC-SHA1",
oauth_version="1.0", oauth_signature="%2FKI0z4%2BMKXZVPQig0tt3YMMtBEk
%3D"
+
+ delimited=length&replies=all&with=user&follow=17448575
+
+ HTTP failure 1 of 20 connecting to stream: HTTP ERROR 400: Bad
Request (45Must specify some followings to use sitestreams, e.g.
follow=1,2,3,4). Sleeping for 10 seconds.
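One thing I notice in the dump above: the signature base string
contains only the oauth_* parameters. If I read the OAuth 1.0a spec
correctly, form-encoded body parameters must be included in the
signature as well, so the base string should also contain delimited,
follow, replies and with, sorted alphabetically with the rest, i.e.
something like:

POST&https%3A%2F%2Fsitestream.twitter.com%2F2b%2Fsite.json&delimited%3Dlength%26follow%3D17448575%26oauth_consumer_key%3D[Surpressed]%26oauth_nonce%3D17c00ae9913dd68d5d9c493c1474807c%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1311277876%26oauth_token%3D[Surpressed]%26oauth_version%3D1.0%26replies%3Dall%26with%3Duser

I'm not certain that's what the 400 is complaining about, though.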
I was wondering if anyone had any ideas as to why we receive this
message. Any help would be greatly appreciated; we've been struggling
with this issue.
Thanks!
- Craig Walls <habuma@gmail.com> Jul 22 07:59AM -0700
 
I fully understand how followers/ids (or friends/ids) can be used with
users/lookup to fetch profiles of users who follow or are friends of a user.
Makes perfect sense, except for a bit of dissonance between the two
resources.
If I use followers/ids without setting the cursor parameter, I might get all
of a user's followers, but if the user has a lot of followers (Charlie Sheen
as an extreme example), I get an error. Okay...the documentation warned me
that might happen. So, I set cursor to -1 to get the first 5000 of Charlie's
followers. But then users/lookup only lets me fetch 100 profiles at a time.
So I have to chunk the 5000 IDs I get from followers/ids into groups of 100
and make 50 calls to users/lookup to get all 5000 profiles. Aside from the
rate limit concerns, this takes quite some time to complete.
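Concretely, that dance looks something like the following (a rough
sketch in R just to illustrate; these are the v1 REST endpoints, and
authentication and error handling are omitted):

library(RCurl)
library(RJSONIO)
# first block of up to 5000 follower ids (cursor = -1 means "start")
resp <- fromJSON(getURL(paste('https://api.twitter.com/1/followers/',
'ids.json?screen_name=charliesheen&cursor=-1', sep='')))
ids <- resp$ids
chunks <- split(ids, ceiling(seq_along(ids) / 100)) # 5000 ids -> 50 chunks of 100
profiles <- list()
for (chunk in chunks) {
url <- paste('https://api.twitter.com/1/users/lookup.json?user_id=',
paste(chunk, collapse = ','), sep = '')
profiles <- c(profiles, fromJSON(getURL(url))) # one lookup call per 100 profiles
}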
Okay, so maybe I shouldn't fetch 5000 profiles at a time. Maybe I should
only fetch 100 at a time and page through them as/if needed. Well, that's
fine and even desirable...but now I have two page-oriented things to keep
track of: The cursor for the given set of 5000 IDs and another index into
that 5000 for the 100 IDs I want to fetch. It's not a big deal, I suppose,
but it wouldn't be necessary if I could get followers/ids to give me only
100 (or 50...or 25...or some number I specify) IDs at a time.
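In other words, the client ends up carrying two indices around:

cursor <- -1 # which block of up to 5000 ids followers/ids handed me
offset <- 0  # where I am inside that block, advanced 100 at a time
page <- ids[(offset + 1):(offset + 100)] # the 100-id slice for users/lookup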
Honestly, I don't like cursoring...there, I've said it, and I do feel
better. It seems to leak an internal detail through the API. It also
leaves me with less control over which set, and how many entries, I
get. I prefer the page and count/per_page approach taken by other API
resources (which raises a side question: why count on some resources
and per_page on others?).
What is recommended here? Keep track of the cursor plus an index into
that cursor's block for the 100-ID "page" I need? Doable, I suppose,
but it would be so much easier if I could at least get a cursored
chunk of 100 IDs...or, better yet, paging.
- Fabien Penso <fabienpenso@gmail.com> Jul 22 10:17AM +0200
 
Am I the only one having this issue?
Have you visited the Developer Discussions feature on https://dev.twitter.com/discussions yet?
Twitter developer links:
Documentation and resources: https://dev.twitter.com/docs
API updates via Twitter: https://twitter.com/twitterapi
Unsubscribe or change your group membership settings: http://groups.google.com/group/twitter-development-talk/subscribe